
    Morality, Uncertainty

    Non-Consequentialist moral theories posit the existence of moral constraints: prohibitions on performing particular kinds of wrongful acts, regardless of the good those acts could produce. Many believe that such theories cannot give satisfactory verdicts about what we morally ought to do when there is some probability that we will violate a moral constraint. In this article, I defend Non-Consequentialist theories from this critique. Using a general choice-theoretic framework, I identify various types of Non-Consequentialism that have otherwise been conflated in the debate. I then prove a number of formal possibility and impossibility results establishing which types of Non-Consequentialism can – and which cannot – give us adequate guidance through a risky world.

    The Problem of Ignorance

    Holly Smith (2014) contends that subjective deontological theories – those that hold that our moral duties are sensitive to our beliefs about our situation – cannot correctly determine whether one ought to gather more information before acting. Against this contention, I argue that deontological theories can use a decision-theoretic approach to evaluating the moral importance of information. I then argue that this approach compares favourably with an alternative approach proposed by Philip Swenson (2016).

    Morality Under Risk

    Many argue that absolutist moral theories – those that prohibit particular kinds of actions or trade-offs under all circumstances – cannot adequately account for the permissibility of risky actions. In this dissertation, I defend various versions of absolutism against this critique, using overlooked resources from formal decision theory. Against the prevailing view, I argue that almost all absolutist moral theories can give systematic and plausible verdicts about what to do in risky cases. In doing so, I show that critics have overlooked: (1) the fact that absolutist theories – and moral theories, more generally – underdetermine their formal decision-theoretic representations; (2) that decision theories themselves can be generalised to better accommodate distinctively absolutist commitments. Overall, this dissertation demonstrates that we can navigate a risky world without compromising our moral commitments.

    Just Probabilities

    I defend the thesis that legal standards of proof are reducible to thresholds of probability. Many have rejected this thesis because it seems to entail that defendants can be found liable solely on the basis of statistical evidence. I argue that this inference is invalid. I do so by developing a view, called Legal Causalism, that combines Thomson's (1986) causal analysis of evidence with recent work in formal theories of causal inference. On this view, legal standards of proof can be reduced to probabilities, but deriving these probabilities involves more than just statistics.

    Liberalism and Automated Injustice

    Many of the benefits and burdens we might experience in our lives — from bank loans to bail terms — are increasingly decided by institutions relying on algorithms. In a sense, this is nothing new: algorithms — instructions whose steps can, in principle, be mechanically executed to solve a decision problem — are at least as old as allocative social institutions themselves. Algorithms, after all, help decision-makers to navigate the complexity and variation of whatever domains they are designed for. In another sense, however, this development is startlingly new: not only are algorithms being deployed in ever more social contexts, they are being mechanically executed not merely in principle, but pervasively in practice. Due to recent advances in computing technology, the benefits and burdens we experience in our lives are now increasingly decided by automata, rather than by each other. How are we to morally assess these technologies? In this chapter, I propose a preliminary conceptual schema for identifying and locating the various injustices of automated, algorithmic social decision systems, from a broadly liberal perspective.

    Proceed with Caution

    It is becoming more common that the decision-makers in private and public institutions are predictive algorithmic systems, not humans. This article argues that relying on algorithmic systems is procedurally unjust in contexts involving background conditions of structural injustice. Under such nonideal conditions, algorithmic systems, if left to their own devices, cannot meet a necessary condition of procedural justice, because they fail to provide a sufficiently nuanced model of which cases count as relevantly similar. Resolving this problem requires deliberative capacities uniquely available to human agents. After exploring the limitations of existing formal algorithmic fairness strategies, the article argues that procedural justice requires that human agents relying wholly or in part on algorithmic systems proceed with caution: by avoiding doxastic negligence about algorithmic outputs, by exercising deliberative capacities when making similarity judgments, and by suspending belief and gathering additional information in light of higher-order uncertainty.

    Moral priorities under risk

    Many moral theories are committed to the idea that some kinds of moral considerations should be respected, whatever the cost to ‘lesser’ types of considerations. A person’s life, for instance, should not be sacrificed for the trivial pleasures of others, no matter how many would benefit. However, according to the decision-theoretic critique of lexical priority theories, accepting lexical priorities inevitably leads us to make unacceptable decisions in risky situations. It seems that to operate in a risky world, we must reject lexical priorities altogether. This paper argues that lexical priority theories can, in fact, offer satisfactory guidance in risky situations. It does so by equipping lexical priority theories with overlooked resources from decision theory.
